Vision-based virtual-real registration in Augmented Reality (AR) is highly sensitive to changes in illumination, occlusion and perspective, which can easily cause registration failure. To address this problem, a natural-feature virtual-real registration method based on the Binary Robust Invariant Scalable Keypoints-Speeded Up Robust Features (BRISK-SURF) algorithm was proposed. Firstly, the Speeded Up Robust Features (SURF) extractor was used to detect feature points. Then, the Binary Robust Invariant Scalable Keypoints (BRISK) descriptor was used to describe the feature points in binary form, and the feature points were matched accurately and efficiently using the Hamming distance. Finally, virtual-real registration was realized according to the homography between images. Experiments were performed on image feature matching and virtual-real registration. The results show that the average precision of the BRISK-SURF algorithm is basically the same as that of SURF and about 25% higher than that of BRISK, while the average recall of BRISK-SURF is about 10% higher than that of BRISK; the result of the virtual-real registration method based on BRISK-SURF is close to the reference standard data, with high precision and good real-time performance. The experimental results illustrate that the proposed method achieves high recognition accuracy, registration precision and real-time performance for images with different illuminations, occlusions and perspectives. Based on the proposed method, an interactive AR-based tourist resource presentation and experience system was also implemented.
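The Hamming-distance matching step can be sketched as a brute-force nearest-neighbor search over binary descriptors. This is an illustrative toy (8-bit "descriptors" and the `max_dist` cutoff are invented for the example; real BRISK descriptors are 512-bit):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, max_dist=64):
    """Pair each query descriptor with its nearest train descriptor by Hamming distance."""
    matches = []
    for qi, q in enumerate(query):
        dists = [hamming(q, t) for t in train]
        ti = min(range(len(train)), key=dists.__getitem__)
        if dists[ti] <= max_dist:
            matches.append((qi, ti))
    return matches

# toy 8-bit "descriptors"
query = [0b10110010, 0b01100110]
train = [0b10110011, 0b11111111, 0b01100111]
print(match_descriptors(query, train))  # → [(0, 0), (1, 2)]
```

Hamming distance on binary descriptors reduces to an XOR and a popcount, which is why BRISK matching is so much cheaper than Euclidean matching of SURF's floating-point descriptors.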
For entity recognition of commodity attributes in clothing commodity titles, a hybrid method combining Conditional Random Field (CRF) with entity boundary detection rules was proposed. Firstly, hidden entity hint character messages were obtained through a statistical method; secondly, statistical word indicators and their implications were interpreted at character granularity; thirdly, entity boundary detection rules were proposed based on the entity hint characters and statistical word indicators; finally, a method for determining the threshold values in the rules was proposed based on empirical risk minimization. In comparison experiments with character-based CRF models, the overall precision, recall and F1 score were increased by 1.61%, 2.54% and 2.08% respectively, which validates the effectiveness of the entity boundary detection rules. The proposed method can be applied to e-commerce Information Retrieval (IR), e-commerce Information Extraction (IE), query intention identification, etc.
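The boundary-rule idea can be sketched as scoring each character with a statistical hint indicator and flagging positions whose score exceeds a threshold. The scores, threshold and example title here are entirely hypothetical; in the paper the threshold is chosen by empirical risk minimization on training data:

```python
def boundary_positions(chars, hint_score, threshold=0.5):
    """Indices where the character's hint score marks a likely entity boundary."""
    return [i for i, c in enumerate(chars) if hint_score.get(c, 0.0) >= threshold]

# hypothetical per-character hint scores (e.g. estimated from corpus statistics)
hint_score = {"/": 0.9, " ": 0.8, "L": 0.1}
title = list("dress L/XL red")
print(boundary_positions(title, hint_score))  # → [5, 7, 10]
```

Boundaries found this way would then be combined with the CRF's label sequence rather than used alone.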
To alleviate the shortage of licensed spectrum resources, a method to design and implement a multi-user Cooperative Spectrum Sharing (CSS) mechanism was proposed based on the asymmetry of network information and the selfishness of communication users. First, by modeling CSS as a labor market, a modeling method for the multi-user contract-based CSS framework was investigated under the symmetric network information scenario. Then, to avoid the moral hazard caused by the hidden actions of Secondary Users (SUs) after contract assignment, a contract-based CSS model was proposed to incentivize SUs to contribute to spectrum sharing. The experimental results show that when the direct transmission rate of the Primary User (PU) is less than 0.2 b/s, the network capacity is more than three times that of non-cooperative spectrum sharing. The proposed multi-user contract-based CSS framework provides new ideas for the efficient sharing and utilization of spectrum resources.
The Visual Background extractor (ViBe) model for moving target detection cannot avoid the interference caused by irregular flickering-pixel noise in dynamic outdoor scenes. To solve this issue, a flickering-pixel noise suppression method based on the ViBe algorithm was proposed. In the background model initialization stage, a fixed standard deviation of the background model samples was used as a threshold to limit the range of the samples and obtain suitable background samples for each pixel. In the foreground detection stage, an adaptive detection threshold was applied to improve the accuracy of the detection result. In the background model update stage, edge inhibition was performed on image-edge background pixels to prevent erroneous background sample values from being updated into the model. On this basis, morphological operations were added to repair connected components and obtain more complete foreground images. Finally, the proposed method was compared with the original ViBe algorithm and an improved ViBe with morphological post-processing on multiple video sequences. The experimental results show that the proposed method suppresses flickering-pixel noise effectively and yields more accurate detection results.
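The sample-range limit and the ViBe-style match test can be sketched for a single pixel as follows. All constants (`RADIUS`, `MIN_MATCHES`, `STD_LIMIT`) and intensity values are illustrative stand-ins, not the paper's parameters:

```python
import statistics

RADIUS, MIN_MATCHES, STD_LIMIT = 20, 2, 15.0

def init_samples(candidates):
    """Reject initial samples deviating more than STD_LIMIT from the mean,
    mirroring the fixed-standard-deviation limit on background samples."""
    mu = statistics.mean(candidates)
    return [c for c in candidates if abs(c - mu) <= STD_LIMIT]

def is_background(pixel, samples):
    """A pixel is background if at least MIN_MATCHES samples lie within RADIUS."""
    return sum(abs(pixel - s) <= RADIUS for s in samples) >= MIN_MATCHES

samples = init_samples([100, 104, 98, 102, 140])  # 140 is a flicker outlier, rejected
print(samples, is_background(101, samples), is_background(150, samples))
```

Rejecting the outlier 140 at initialization is what keeps a transient flicker value from contaminating the per-pixel model.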
A big data benchmark is urgently needed by customers, industry and academia to evaluate big data systems, improve current techniques and develop new ones. A number of prominent works from the last several years were reviewed: their characteristics were introduced and their shortcomings analyzed. On this basis, some suggestions for building a new big data benchmark are provided: 1) component benchmarks and end-to-end benchmarks should be used in combination to test the individual tools inside a system as well as the system as a whole, with component benchmarks serving as ingredients of the whole big data benchmark suite; 2) workloads should be enriched with complex analytics beyond SQL queries to cover different application requirements; 3) besides performance metrics (response time and throughput), other metrics should also be considered, including scalability, fault tolerance, energy saving and security.
Focusing on the drawback that the Discovering Maximum Frequent Itemsets Algorithm (DMFIA) generates a large number of maximum frequent candidate itemsets in each dimension when the dataset has many candidate items and the maximum frequent itemsets are not long, an improved algorithm for mining Maximum Frequent Itemsets based on the Frequent-Pattern tree (FP-MFIA) was proposed. According to the header table (Htable) of the FP-tree, the algorithm mined maximum frequent itemsets by bottom-up search, which accelerated the counting of candidates. By producing lower-dimensional infrequent itemsets from the conditional pattern base of each layer during mining, and by pruning and reducing the dimensions of candidate itemsets, the number of candidate itemsets was largely reduced. At the same time, taking full advantage of the properties of maximum frequent itemsets reduced the search space. Comparisons of computation time under different supports show that the time efficiency of FP-MFIA is at least twice that of DMFIA and BDRFI (algorithm for mining frequent itemsets based on dimensionality reduction of frequent itemsets), indicating that FP-MFIA has a clear advantage when candidate itemsets are of high dimension.
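The maximality property that FP-MFIA exploits can be stated concretely: an itemset is maximal frequent iff no proper superset of it is frequent. The brute-force sketch below illustrates only this definition on a toy dataset; the paper's contribution is avoiding exactly this kind of exhaustive enumeration via bottom-up search on the FP-tree header table:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_sup):
    """All itemsets whose support is at least min_sup (exhaustive, for illustration)."""
    items = sorted({i for t in transactions for i in t})
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            support = sum(set(cand) <= t for t in transactions)
            if support >= min_sup:
                freq.append(frozenset(cand))
    return freq

def maximal(freq):
    """Keep only itemsets with no frequent proper superset."""
    return [f for f in freq if not any(f < g for g in freq)]

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
fi = frequent_itemsets(tx, min_sup=2)
print(sorted(sorted(s) for s in maximal(fi)))  # → [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

Because every frequent itemset is a subset of some maximal one, mining only the maximal sets compresses the output without losing the frequent-itemset border.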
Focusing on the issue that existing prediction models of task accuracy fail to capture the speed-accuracy tradeoff in human-computer interaction, a predictive model of accuracy based on temporal constraint was proposed. Through controlled experiments, the relationship between task accuracy and a specified temporal constraint was studied when users tried to complete a task within a specified amount of time in a computer user interface, which was used to measure human performance in temporally constrained tasks. A series of steering tasks with temporal constraints were designed in the experiment, manipulating the tunnel amplitude, tunnel width and specified movement time. The dependent variable was the task accuracy, quantified as the lateral deviation of the trajectory. Analysis of the experimental data from 30 participants showed that task accuracy was linearly related to tunnel width and steering speed (tunnel amplitude divided by specified movement time). Finally, a quantitative model for predicting task accuracy in steering tasks with temporal constraints was established based on least-squares regression. The proposed model fits the real dataset well, with a goodness of fit of 0.857.
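Fitting a linear model of lateral deviation against tunnel width and steering speed by least squares can be sketched as below. The data and coefficients are synthetic (generated from an assumed exact linear relation so the regression recovers them); they are not the paper's measurements:

```python
import numpy as np

# synthetic observations: tunnel width W (px) and steering speed = amplitude / movement time
W = np.array([20.0, 30.0, 40.0, 20.0, 30.0, 40.0])
speed = np.array([1.0, 1.5, 2.0, 2.5, 1.0, 1.2])

# assumed "true" linear relation: deviation = 0.5 + 0.1*W + 2.0*speed
sigma = 0.5 + 0.1 * W + 2.0 * speed

# least-squares fit of deviation on [1, W, speed]
X = np.column_stack([np.ones_like(W), W, speed])
coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
print(np.round(coef, 3))
```

With real experimental data the fit would not be exact, and the coefficient of determination (the paper reports 0.857) would quantify the model's goodness of fit.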
The Electrocardiogram (ECG) signal has attracted widespread interest for potential use in biometrics due to its ease of monitoring and individual uniqueness. To address the accuracy and real-time performance problems of human identification, a fast and robust ECG-based identification algorithm particularly suitable for miniaturized embedded platforms was proposed. Firstly, a dynamic-threshold method was used to extract stable ECG waveforms as template samples and test samples; then, based on a modified Dynamic Time Warping (DTW) method, the degree of difference between matched samples was calculated to obtain the recognition result. Considering that ECG is a time-varying and non-stationary signal, the ECG template database was dynamically updated to keep the templates consistent with the current body status and further improve recognition accuracy and robustness. Analysis results on the MIT-BIH Arrhythmia database and self-collected experimental data show that the proposed algorithm achieves an accuracy of 98.6%. In addition, the average running times of the dynamic threshold setting and the optimized DTW algorithm on an Android mobile terminal are about 59.5 ms and 26.0 ms respectively, demonstrating significantly improved real-time performance.
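The core DTW comparison can be sketched with the textbook dynamic-programming recurrence (the paper uses a modified, optimized DTW; this is the classical unconstrained version, and the toy "beats" are illustrative):

```python
def dtw(a, b):
    """Classical DTW distance between two 1-D sequences via dynamic programming."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

template = [0, 1, 2, 1, 0]
test_beat = [0, 1, 1, 2, 1, 0]   # same shape, slightly time-stretched
print(dtw(template, test_beat))  # → 0.0
```

DTW's tolerance to time-axis stretching is what makes it suitable for heartbeats, whose duration varies with heart rate even for the same person.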
To solve the low running speed problem of the Knuth39 random number generator, a Knuth39 parallelization method based on the Many Integrated Core (MIC) platform was proposed. Firstly, the random number sequence of the Knuth39 generator was divided into subsequences at regular intervals. Then, each thread generated random numbers from the starting point of its corresponding subsequence. Finally, the sequences generated by all threads were combined into the final sequence. The experimental results show that the parallelized Knuth39 generator passed 452 tests of TestU01, with results identical to those of the non-parallelized Knuth39 generator. Compared with a single thread on a Central Processing Unit (CPU), the optimal speedup on the MIC platform is 15.69. The proposed method effectively improves the running speed of the Knuth39 generator while preserving the randomness of the generated sequences, making it more suitable for high-performance computing.
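The block-splitting idea can be sketched with a toy lagged-Fibonacci generator standing in for Knuth39 (the lags, modulus and seeding scheme below are invented for the example, and the skip here is naive linear stepping, whereas a practical implementation would jump ahead efficiently):

```python
def lagged_fib(seed, n, skip=0, p=7, q=3, m=2**16):
    """Toy additive lagged-Fibonacci generator: x[k] = (x[k-p] + x[k-q]) mod m.
    Returns n outputs starting after `skip` values of the stream."""
    state = [(seed + i * 2654435761) % m for i in range(p)]
    out = []
    for k in range(skip + n):
        x = (state[-p] + state[-q]) % m
        state = state[1:] + [x]
        if k >= skip:
            out.append(x)
    return out

serial = lagged_fib(42, 12)               # single-threaded reference stream
block = 4
# each "thread" starts at its own subsequence boundary; concatenation
# reproduces the serial stream exactly
parallel = sum((lagged_fib(42, block, skip=w * block) for w in range(3)), [])
print(parallel == serial)  # → True
```

Because every worker draws from a disjoint block of the same underlying stream, the combined output is bit-identical to the serial generator, which is why the parallel version passes the same TestU01 tests.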
To improve the accuracy of bird sound recognition in low Signal-to-Noise Ratio (SNR) environments, a bird sound recognition technique based on Radon Transform (RT) and Translation Invariant Discrete Wavelet Transform (TIDWT) of the denoised spectrogram was proposed. First, an improved multi-band spectral subtraction method was presented to reduce the background noise. Second, short-time energy was used to detect and remove the silent segments of the cleaned bird sound. Then, the bird sound was converted into a spectrogram, and RT and TIDWT were used to extract features. Finally, classification was performed with a Support Vector Machine (SVM) classifier. The experimental results show that the method achieves good recognition performance even when the SNR is below 10 dB.
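The short-time-energy silence-removal step can be sketched as dropping frames whose mean squared amplitude falls below a threshold. Frame length, threshold and the toy signal are illustrative values, not the paper's settings:

```python
def remove_silence(signal, frame_len=4, threshold=0.5):
    """Keep only frames whose short-time energy reaches the threshold."""
    kept = []
    for i in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        if energy >= threshold:
            kept.extend(frame)
    return kept

sig = [0.0, 0.1, 0.0, -0.1,   # silence
       1.0, -1.2, 0.9, -1.1,  # bird call
       0.0, 0.0, 0.1, 0.0]    # silence
print(remove_silence(sig))  # → [1.0, -1.2, 0.9, -1.1]
```

Removing silent frames before computing the spectrogram keeps the RT/TIDWT features focused on the vocalization itself rather than on residual noise.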
To deal with texture detection and classification, a new texture description method based on the self-similarity matrix of the local spectrum energy of a Gabor filter bank output was presented. Firstly, local frequency band and orientation information of a texture template was obtained by convolving the template with a polar Log-Gabor filter bank. Then the self-similarities of different local frequency patches were measured and stored in a self-similarity matrix, which was defined as the texture descriptor. Finally, this descriptor was used for texture detection and classification. Because it reflects the level of self-similarity among different bands and orientations, the descriptor has a lower dependency on the Gabor filter bank parameters. In the tests, this descriptor produced better detection results than the Homogeneous Texture Descriptor (HTD) and other self-similarity descriptors, and the accuracy of multi-texture classification reached up to 91%. The experimental results demonstrate that the self-similarity matrix of the local power spectrum is an effective texture descriptor. The output of texture detection and classification can be widely used in subsequent texture analysis tasks such as texture segmentation and recognition.
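Building a self-similarity matrix over per-channel energy vectors can be sketched as below. The toy energy vectors stand in for Log-Gabor filter-bank outputs, and cosine similarity is used as one plausible similarity measure (the paper does not necessarily use this exact measure):

```python
import math

def cosine(u, v):
    """Cosine similarity between two energy vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def self_similarity(energies):
    """Pairwise similarity matrix over all band/orientation channels."""
    n = len(energies)
    return [[cosine(energies[i], energies[j]) for j in range(n)] for i in range(n)]

# one toy energy vector per (band, orientation) channel
energies = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
S = self_similarity(energies)
print([round(x, 3) for x in S[2]])  # → [0.707, 0.707, 1.0]
```

Because the descriptor encodes relative similarities between channels rather than raw responses, it stays comparatively stable when filter-bank parameters change, which matches the robustness the abstract reports.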